Robust MPI Message Delivery with Guaranteed Resources
Authors
Abstract
A mechanism for message delivery is at the core of any implementation of the Message-Passing Interface (MPI) [1]. In a distributed computer, with or without shared memory, it is most feasible to synchronize messages in the local memory of the destination process. Accordingly, the initial step in message delivery is for the source process to transmit a message envelope, a small packet containing synchronization variables and possibly other information (p. 19), to the destination. Whether or not an MPI implementation provides buffer space for the message data, that is, the application's data (p. 27, ll. 26-32), it must store envelopes in the destination's local memory when there is no matching posted receive. Because local memory is a limited resource, message envelope space is limited as well.
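To make the resource constraint concrete, the following C sketch (not taken from the paper) shows one way a receiver might handle an arriving envelope when the envelope space is a fixed pool of slots; the structure fields, the match_posted_receive hook, and the slot count are all illustrative assumptions, not the paper's protocol.

/*
 * Minimal sketch of receiver-side envelope handling with a bounded
 * pool of envelope slots in the destination's local memory.
 * All names and sizes here are illustrative assumptions.
 */
#include <stdbool.h>
#include <stddef.h>

#define ENVELOPE_SLOTS 256          /* assumed fixed envelope space */

typedef struct {
    int src_rank;                   /* sending process */
    int tag;                        /* message tag */
    int context_id;                 /* communicator context */
    size_t payload_len;             /* length of the message data */
} envelope_t;

static envelope_t unexpected[ENVELOPE_SLOTS];  /* unexpected-envelope queue */
static size_t     n_unexpected = 0;

/* Hypothetical hook: returns true if a matching receive was already posted. */
extern bool match_posted_receive(const envelope_t *env);

/* Returns true if the envelope was accepted, false if envelope space
 * is exhausted and the sender must be told to retry (flow control). */
bool on_envelope_arrival(const envelope_t *env)
{
    if (match_posted_receive(env)) {
        /* A receive is waiting: data can be delivered directly into
         * the application's buffer; no envelope slot is consumed. */
        return true;
    }
    if (n_unexpected < ENVELOPE_SLOTS) {
        /* No matching receive yet: park the envelope locally until
         * the application posts one. */
        unexpected[n_unexpected++] = *env;
        return true;
    }
    /* Envelope space exhausted: local memory is a limited resource,
     * so the implementation must refuse or defer the message. */
    return false;
}

In this sketch the false return path is where a guaranteed-resource scheme would have to kick in, since an envelope that cannot be stored must either be retried by the sender or reserved for in advance.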
Similar resources
Scalable High Performance Message Passing over InfiniBand for Open MPI
InfiniBand (IB) is a popular network technology for modern high-performance computing systems. MPI implementations traditionally support IB using a reliable, connection-oriented (RC) transport. However, per-process resource usage that grows linearly with the number of processes makes this approach prohibitive for large-scale systems. IB provides an alternative in the form of a connectionless u...
Deductive Verification of Parallel Programs Using Why3
The Message Passing Interface specification (MPI) defines a portable message-passing API used to program parallel computers. MPI programs present a number of correctness challenges: sent and expected values in communications may not match, resulting in incorrect computations and possibly crashes; and programs may deadlock, wasting resources. Existing tools ar...
A Peer-to-Peer Framework for Robust Execution of Message Passing Parallel Programs on Grids
This paper presents P2P-MPI, a middleware aimed at computational grids. From the programmer's point of view, P2P-MPI provides a message-passing programming model which enables the development of MPI applications for grids. Its originality lies in its adaptation to unstable environments. First, the peer-to-peer design of P2P-MPI allows for a dynamic discovery of collaborating resources. Second, it...
Inferring Types for Parallel Programs
The Message Passing Interface (MPI) framework is widely used in implementing imperative programs that exhibit a high degree of parallelism. The PARTYPES approach proposes a behavioural type discipline for MPI-like programs in which a type describes the communication protocol followed by the entire program. Well-typed programs are guaranteed to be exempt from deadlocks. In this paper we describe...
MPI- and CUDA- implementations of modal finite difference method for P-SV wave propagation modeling
Among different discretization approaches, the Finite Difference Method (FDM) is widely used for acoustic and elastic full-waveform modeling. An inevitable deficit of the technique, however, is its severe demand for computational resources. A promising solution is parallelization, where the problem is broken into several segments, and the calculations are distributed over different processors. ...
Journal title:
Volume Issue
Pages -
Publication date 1995